An Open-Source 7-Axis, Robotic Platform to Enable Dexterous Procedures within CT Scanners
This paper describes the design, manufacture, and performance of a highly
dexterous, low-profile, 7 Degree-of-Freedom (DOF) robotic arm for CT-guided
percutaneous needle biopsy. Direct CT guidance allows physicians to localize
tumours quickly; however, needle insertion is still performed by hand. This
system is mounted to a fully active gantry superior to the patient's head and
teleoperated by a radiologist. Unlike other similar robots, this robot's fully
serial-link approach uses a unique combination of belt and cable drives for
high transparency and minimal backlash, allowing an expansive working area
and numerous approach angles to targets, all while maintaining a small in-bore
cross-section of less than . Simulations verified the system's
expansive collision-free workspace and its ability to hit targets across the
entire chest, as required for lung cancer biopsy. Targeting error is on average
on a teleoperated accuracy task, demonstrating that the system is sufficiently
accurate to perform biopsy procedures. The system is designed for lung biopsies
due to the large working volume required to reach peripheral lung lesions;
however, that same working volume, combined with the small in-bore
cross-section, makes the robotic system an effectively general-purpose
CT-compatible manipulation device for percutaneous procedures. Finally, with
the considerable development time invested in designing a precise,
flexible-use system, and with the aim of reducing the burden on other
researchers developing algorithms for image-guided surgery, the system is
released open-access and is, to the best of our knowledge, the first
open-hardware image-guided biopsy robot of its kind.

Comment: 8 pages, 9 figures, final submission to IROS 201
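The abstract does not give the arm's kinematic parameters, but the serial-link structure it describes can be illustrated with textbook forward kinematics. The Denavit-Hartenberg values below are placeholders for illustration only, not the robot's actual geometry:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform (4x4) for one Denavit-Hartenberg link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Chain per-link transforms for a serial arm; returns end-effector pose.

    `joint_angles` holds one revolute angle per joint (7 for a 7-DOF arm);
    `dh_params` holds the fixed (d, a, alpha) triple for each link.
    """
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T
```

With seven identical placeholder links of length 0.1 m and all joints at zero, the pose reduces to a pure translation along x, which makes the chaining easy to sanity-check.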
SemHint-MD: Learning from Noisy Semantic Labels for Self-Supervised Monocular Depth Estimation
Without ground truth supervision, self-supervised depth estimation can be
trapped in a local minimum due to the gradient-locality issue of the
photometric loss. In this paper, we present a framework that enhances depth
estimation by leveraging semantic segmentation to guide the network out of the
local minimum. Prior works have proposed sharing encoders between these two tasks or
explicitly align them based on priors like the consistency between edges in the
depth and segmentation maps. Yet, these methods usually require ground truth or
high-quality pseudo labels, which may not be easily accessible in real-world
applications. In contrast, we investigate self-supervised depth estimation
along with a segmentation branch that is supervised with noisy labels provided
by models pre-trained with limited data. We extend parameter sharing from the
encoder to the decoder and study the influence of different numbers of shared
decoder parameters on model performance. Also, we propose to use cross-task
information to refine the current depth and segmentation predictions,
generating pseudo-depth and semantic labels for training. The advantages of the proposed
method are demonstrated through extensive experiments on the KITTI benchmark
and a downstream task for endoscopic tissue deformation tracking.
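The abstract does not spell out the photometric loss whose gradient-locality issue motivates the work; below is a minimal sketch of one common self-supervised variant, a per-pixel minimum L1 reprojection error over warped source views. The function name and the min-over-sources choice are illustrative assumptions, not the authors' method:

```python
import numpy as np

def min_reprojection_loss(target, reconstructions):
    """Per-pixel minimum photometric (L1) error over reconstructed source views.

    `target` is an H x W x 3 frame; `reconstructions` are source frames warped
    into the target view using predicted depth and pose. Taking the per-pixel
    minimum over sources (rather than the mean) down-weights occluded pixels,
    a common trick in self-supervised depth estimation.
    """
    # Mean absolute error over color channels gives one error map per source.
    errors = [np.abs(target - r).mean(axis=-1) for r in reconstructions]
    # Per-pixel minimum across sources, then average over the image.
    return float(np.minimum.reduce(errors).mean())
```

Because the loss compares raw pixel intensities, its gradients only see a small neighborhood of the current warp, which is the gradient-locality issue that the semantic-segmentation guidance in the paper is meant to counteract.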